As complex machine learning models are increasingly used in sensitive applications such as banking, trading, or credit scoring, the demand for reliable explanation mechanisms keeps growing. Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations. However, attribution methods typically assume a stationary setting in which the predictive model has already been trained and remains static. As a result, it is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications. In this paper, we discuss the impact of temporal change on local feature attributions. In particular, we show that local attributions can become obsolete each time the predictive model is updated or concept drift alters the data-generating distribution. Consequently, local feature attributions in data streams can only provide high explanatory power when combined with a mechanism that lets us detect and respond to local changes over time. To this end, we present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift. CDLEEDS serves as an intuitive extension of attribution-based explanation techniques, identifying outdated local attributions and enabling more targeted recomputation. In our experiments, we also show that the proposed framework can reliably detect both local and global concept drift. Our work thereby contributes to more meaningful and robust explainability in online machine learning.
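To make the staleness problem concrete, the following is a minimal sketch (not the authors' CDLEEDS implementation) of one way a cached local attribution could be flagged as outdated: probe the model in a small neighbourhood of the explained instance and compare against the prediction recorded when the attribution was computed. All names, the neighbourhood radius, and the tolerance are illustrative assumptions.

```python
import numpy as np

def attribution_outdated(model_predict, x, cached_prediction,
                         n_samples=50, radius=0.05, tol=0.1, seed=None):
    """Return True if the model's local behaviour around x has changed enough
    that the stored local attribution should be recomputed."""
    rng = np.random.default_rng(seed)
    # Probe a small neighbourhood of the explained instance x (1-D feature vector).
    neighbours = x + rng.normal(scale=radius, size=(n_samples, x.shape[0]))
    current_local_mean = float(np.mean(model_predict(neighbours)))
    # Compare against the prediction recorded when the attribution was cached.
    return abs(current_local_mean - cached_prediction) > tol

# Usage idea: after each model update or stream batch, recompute attributions
# only for those instances where this check fires.
```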
A major obstacle to the successful deployment of AI-based computer-aided diagnosis (CAD) systems in clinical workflows is their lack of transparent decision making. While commonly used explainable AI methods provide some insight into opaque algorithms, such explanations are usually convoluted and not readily comprehensible except by highly trained experts. The explanation of decisions about the malignancy of skin lesions in dermoscopic images demands particular clarity, as the underlying medical problem definition is itself ambiguous. This work presents ExAID (Explainable AI for Dermatology), a novel framework for biomedical image analysis that provides multi-modal, concept-based explanations consisting of easy-to-understand textual explanations supplemented by visual maps that justify the predictions. ExAID relies on Concept Activation Vectors to map human concepts to those learned by an arbitrary deep learning model in latent space, and on Concept Localization Maps to highlight those concepts in the input space. This identification of relevant concepts is then used to construct fine-grained textual explanations supplemented by concept-wise location information, yielding comprehensive and coherent multi-modal explanations. All information is presented in a diagnostic interface for use in clinical routine. An educational mode additionally provides dataset-level explanation statistics and tools for data and model exploration to aid medical research and education. Through rigorous quantitative and qualitative evaluation of ExAID, we show the utility of multi-modal explanations in CAD-assisted scenarios, even in cases of wrong predictions. We believe ExAID will provide dermatologists with an effective screening tool that they both understand and trust. Moreover, it can serve as a basis for similar applications in other biomedical imaging fields.
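As a rough illustration of the Concept Activation Vector building block that ExAID relies on (this is a generic CAV sketch, not ExAID's API; all function names and thresholds are assumptions): a linear probe is fitted to separate activations of concept examples from random examples, and the normal of its decision boundary serves as the concept direction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts, random_acts):
    """Fit a linear probe separating layer activations of concept examples from
    activations of random examples; the (normalised) weight vector is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_score(activation, cav):
    """How strongly a single activation vector expresses the concept."""
    return float(activation @ cav)
```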
Object recognition has mostly been approached as a one-hot problem that treats classes as discrete and unrelated. Each image region must be assigned to one member of a set of objects, including a background class, disregarding any similarities among object types. In this work, we compare the error statistics of class embeddings learned from one-hot training with semantically structured embeddings from natural language processing or knowledge graphs, as widely applied in open-world object detection. Extensive experimental results over multiple knowledge embeddings and distance metrics indicate that knowledge-based class representations lead to more semantically grounded misclassifications while performing on par with the one-hot approach on the challenging COCO and Cityscapes object detection benchmarks. We generalize our findings to multiple object detection architectures by proposing a knowledge-embedded design for keypoint-based and transformer-based object detection architectures.
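The core idea can be sketched as follows (a hedged illustration under our own assumptions, not the paper's detection heads): instead of a softmax over one-hot classes, the detection head predicts an embedding per region, and the class is chosen as the nearest semantic prototype taken from a language or knowledge-graph embedding space.

```python
import torch
import torch.nn.functional as F

def classify_by_embedding(region_embeddings, class_prototypes):
    """region_embeddings: (N, D) embeddings predicted for N image regions.
    class_prototypes: (C, D) fixed semantic class embeddings (e.g. from a
    knowledge graph). Returns the nearest class per region and all similarities."""
    sims = F.cosine_similarity(
        region_embeddings.unsqueeze(1),            # (N, 1, D)
        class_prototypes.unsqueeze(0), dim=-1)     # (1, C, D) -> (N, C)
    return sims.argmax(dim=-1), sims
```

With such a head, a misclassification tends to land on a semantically close class (the nearest prototype), which is the kind of "semantically grounded" error the abstract describes.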
Abnormal airway dilatation, termed traction bronchiectasis, is a typical feature of idiopathic pulmonary fibrosis (IPF). Volumetric computed tomography (CT) imaging captures the loss of normal airway tapering in IPF. We hypothesize that automated quantification of airway abnormalities could provide estimates of IPF disease extent and severity. We propose AirQuant, an automated computational pipeline that systematically parcellates the airway tree into its lobes and generational branches from a deep-learning-based airway segmentation, deriving airway structural measures from chest CT. Importantly, AirQuant prevents the occurrence of spurious airway branches by thick-wave propagation and removes loops in the airway tree by graph search, overcoming limitations of existing airway skeletonisation algorithms. Tapering between airway segments (intertapering) and airway tortuosity computed by AirQuant were compared between 14 healthy participants and 14 IPF patients. Airway intertapering was significantly decreased and airway tortuosity significantly increased in IPF patients compared with healthy controls. Differences were most marked in the lower lobes, consistent with the typical distribution of IPF-related damage. AirQuant is an open-source pipeline that avoids the limitations of existing airway quantification algorithms and offers clinical interpretability. Automated airway measurements may have potential as novel imaging biomarkers of IPF severity and disease extent.
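For intuition, here is a minimal sketch (our own illustration, not the AirQuant implementation) of the two measures reported above, computed for a single airway segment from its centreline points and lumen diameters; all names and definitions below are assumptions used only for illustration.

```python
import numpy as np

def tortuosity(centreline):
    """Arc length of the centreline divided by the straight-line distance
    between its endpoints; 1.0 corresponds to a perfectly straight airway."""
    steps = np.diff(centreline, axis=0)            # centreline: (N, 3) points
    arc_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(centreline[-1] - centreline[0])
    return arc_length / chord

def intertapering(parent_diameters, child_diameters):
    """Relative reduction in mean lumen diameter from a parent segment to its
    child; reduced values would reflect the loss of tapering described in IPF."""
    d_parent, d_child = np.mean(parent_diameters), np.mean(child_diameters)
    return (d_parent - d_child) / d_parent
```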
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
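As a small illustration of one of the compared ingredients (a hedged sketch, not the published evaluation code; the model, entropy-based score, and rejection fraction are placeholder assumptions): Monte-Carlo Dropout uncertainty for tile classification followed by rejection of the most uncertain tiles.

```python
import torch

def enable_dropout(model):
    """Switch only dropout layers to train mode so MC sampling stays stochastic."""
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def mc_dropout_predict(model, x, n_samples=20):
    """Average softmax outputs over stochastic forward passes and return the
    predictive entropy per tile as an uncertainty score."""
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                  # (batch, classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def keep_after_rejection(entropy, fraction=0.1):
    """Boolean mask of tiles kept after rejecting the top `fraction` by entropy."""
    cutoff = torch.quantile(entropy, 1.0 - fraction)
    return entropy <= cutoff
```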
Charisma is considered as one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond, a plethora of use cases opens up for computational measurement of human charisma, such as for tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond, also automatic measurement appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor are the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
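The affine-combining idea can be sketched roughly as follows (our own simplified illustration, not the authors' ACAE code; the landmark/latent counts are placeholders and the softmax parameterisation, which yields convex and hence affine combinations, is an assumed simplification): latent keypoints are weight-sum-to-one combinations of the input landmarks, and landmarks are reconstructed as such combinations of the latent points.

```python
import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    def __init__(self, n_landmarks=555, n_latent=32):
        super().__init__()
        # Unconstrained logits; softmax over the combined axis guarantees that
        # each output point is a convex (hence affine) combination of inputs.
        self.enc_logits = nn.Parameter(torch.randn(n_latent, n_landmarks))
        self.dec_logits = nn.Parameter(torch.randn(n_landmarks, n_latent))

    def forward(self, joints):                      # joints: (batch, n_landmarks, 3)
        w_enc = self.enc_logits.softmax(dim=-1)     # rows sum to 1
        w_dec = self.dec_logits.softmax(dim=-1)
        latent = w_enc @ joints                     # (batch, n_latent, 3)
        recon = w_dec @ latent                      # (batch, n_landmarks, 3)
        return latent, recon
```

Because the weights sum to one, the latent points live in the same 3D space as the skeleton joints and move consistently with them, which is what makes them usable for consistency regularization across datasets.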
This article concerns Bayesian inference using deep linear networks with output dimension one. In the interpolating (zero noise) regime we show that with Gaussian weight priors and MSE negative log-likelihood loss both the predictive posterior and the Bayesian model evidence can be written in closed form in terms of a class of meromorphic special functions called Meijer-G functions. These results are non-asymptotic and hold for any training dataset, network depth, and hidden layer widths, giving exact solutions to Bayesian interpolation using a deep Gaussian process with a Euclidean covariance at each layer. Through novel asymptotic expansions of Meijer-G functions, a rich new picture of the role of depth emerges. Specifically, we find that the posteriors in deep linear networks with data-independent priors are the same as in shallow networks with evidence maximizing data-dependent priors. In this sense, deep linear networks make provably optimal predictions. We also prove that, starting from data-agnostic priors, Bayesian model evidence in wide networks is only maximized at infinite depth. This gives a principled reason to prefer deeper networks (at least in the linear case). Finally, our results show that with data-agnostic priors a novel notion of effective depth given by \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}\] determines the Bayesian posterior in wide linear networks, giving rigorous new scaling laws for generalization error.
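To make the quoted effective-depth notion tangible, a small worked instance (the numbers are invented purely for illustration):

```latex
\[
  d_{\mathrm{eff}}
  \;=\; \#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}},
  \qquad
  \text{e.g. } 10 \times \frac{10\,000}{1\,000} \;=\; 100 ,
\]
```

so a 10-hidden-layer linear network of width 1,000 trained on 10,000 examples would, under this scaling, behave like an effective depth of 100 in the wide-network posterior.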
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are condition numbers with respect to the variable blocks $x$ and $y$. We propose a new algorithm that requires only $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and computation of $\nabla_y f(x,y)$ is significantly cheaper than computation of $\nabla_x f(x,y)$. In this case, our algorithm substantially outperforms the existing state-of-the-art methods.
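An illustrative back-of-envelope comparison of the claimed separation, with condition numbers invented for the example:

```latex
% Suppose \kappa_x = 10^4 and \kappa_y = 10^2.
\[
  \text{existing methods: }
  \mathcal{O}\!\bigl(\sqrt{10^4}\,\log\tfrac1\epsilon\bigr)
  = \mathcal{O}\!\bigl(100\log\tfrac1\epsilon\bigr)
  \text{ calls to both } \nabla_x f \text{ and } \nabla_y f,
\]
\[
  \text{proposed method: }
  \mathcal{O}\!\bigl(100\log\tfrac1\epsilon\bigr)
  \text{ calls to } \nabla_x f
  \text{ and only } \mathcal{O}\!\bigl(10\log\tfrac1\epsilon\bigr)
  \text{ calls to } \nabla_y f .
\]
```

When each $\nabla_x f$ evaluation is expensive and each $\nabla_y f$ evaluation is cheap, only the first count dominates the overall cost, which is where the separation pays off.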
Springs are efficient in storing and returning elastic potential energy but are unable to hold the energy they store in the absence of an external load. Lockable springs use clutches to hold elastic potential energy in the absence of an external load but have not yet been widely adopted in applications, partly because clutches introduce design complexity, reduce energy efficiency, and typically do not afford high-fidelity control over the energy stored by the spring. Here, we present the design of a novel lockable compression spring that uses a small capstan clutch to passively lock a mechanical spring. The capstan clutch can lock up to 1000 N force at any arbitrary deflection, unlock the spring in less than 10 ms with a control force less than 1 % of the maximal spring force, and provide an 80 % energy storage and return efficiency (comparable to a highly efficient electric motor operated at constant nominal speed). By retaining the form factor of a regular spring while providing high-fidelity locking capability even under large spring forces, the proposed design could facilitate the development of energy-efficient spring-based actuators and robots.
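A rough sanity check of the quoted numbers using the classical capstan relation (our own illustration; the paper's actual clutch geometry and friction coefficient may differ):

```latex
\[
  F_{\text{load}} \;\le\; F_{\text{hold}}\, e^{\mu\theta},
  \qquad
  \frac{F_{\text{load}}}{F_{\text{hold}}}
  = \frac{1000\,\text{N}}{10\,\text{N}} = 100
  \;\Rightarrow\;
  \mu\theta \;\ge\; \ln 100 \approx 4.6 ,
\]
```

so holding a 1000 N spring force with a control force of about 1 % (10 N) requires a friction-times-wrap-angle product of roughly 4.6, attainable for example with $\mu \approx 0.3$ and about 2.5 wraps of the capstan.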